
Posts about Spatial Computing

Comparing Spatial Apples with Oranges

Comparing the current Quest ecosystem to the very first iteration of Apple's Vision Pro doesn't make sense. That's not because the comparison would be unfair to a device still in its infancy, but because the Quest lacks an app ecosystem in the first place.

The word "app" and its definition do the heavy lifting in that sentence.

The Quest has a plethora of amazing experiences. It's the best platform for diving into literal virtual reality and shared (mostly gaming) experiences. While these are technically applications, they're not what people commonly think of as apps.

Spatial Computing

If you go ahead and browse through Meta's Quest Store, you'll find everything from drawing in the space around you, to virtual meeting spaces, to ways to learn piano and even a handful of solutions for those who want to work out while wearing their headset, for whatever reason. What you won't find are Slack, Discord, any kind of note-taking app, a way to write down tasks, or anything else you'd expect from your computer.

That's where Spatial Computing comes into play. The term is more than Apple slapping their branding magic onto something that was already available. Spatial Computing is computing. You, using a computer, in the space around you.

Let's consider the Xbox. Yes, tEcHniCalLy the Xbox is a computer but only pedants would call it that. It's a console and does a great job at being one. Sure, you can open a web browser on it but there's really no reason to do so on a regular basis. The Quest is more like a console than a computer.

A shared design language

As far as I know, there's not even a UI framework for Quest development. Most of it happens in Unity, which is notoriously bad when it comes to creating 2D interfaces. There are a handful of other solutions to get something running on Quest but there's no chance of Quest apps actually getting to a point where the system as a whole has a unified design language.

Which is fine for games and experiences; there's no unified design language between Fortnite and Assassin's Creed either. It doesn't matter, because they're unique experiences.

This is not fine for an operating system of a computing platform.

It might feel like a small and obvious thing that Apple allows iPad apps to run on Vision Pro but it isn't. On day one of Vision Pro, there were more calendar apps available than the Quest accumulated over its years and years of existence. Which makes sense, since one of them is a game console and the other is a computer. One of them more or less requires you to start your development in a game engine; the other lets you create a good-looking, albeit basic todo app with 30 lines of code.
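
To put that claim in perspective, here's roughly what such a minimal SwiftUI todo app could look like. This is only a sketch of my own, with made-up names, not code from any particular app:

import SwiftUI

// A deliberately tiny todo list: type to add items, tap to check them off,
// swipe to delete.
struct Todo: Identifiable {
    let id = UUID()
    var title: String
    var done = false
}

struct ContentView: View {
    @State private var todos: [Todo] = []
    @State private var newTitle = ""

    var body: some View {
        NavigationStack {
            List {
                TextField("New todo", text: $newTitle)
                    .onSubmit {
                        guard !newTitle.isEmpty else { return }
                        todos.append(Todo(title: newTitle))
                        newTitle = ""
                    }
                ForEach($todos) { $todo in
                    Toggle(todo.title, isOn: $todo.done)
                }
                .onDelete { todos.remove(atOffsets: $0) }
            }
            .navigationTitle("Todos")
        }
    }
}

@main
struct TodoApp: App {
    var body: some Scene {
        WindowGroup { ContentView() }
    }
}

Built as an iPad app, something like this shows up on Vision Pro as a perfectly usable window without extra work, which is exactly why the compatibility story matters so much.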

Can Meta massage Quest into becoming an app platform?

I guess?!

It would be a long way to get there, and you'd arrive at the same "quality" of apps Android can try to boast about, but it's possible in theory. Can Vision Pro be used for VR experiences? Sure. It's already possible, with the only caveat being that there are no controllers available at the moment.

Quest and Vision Pro share only one similarity: people strap displays to their heads to interact with software. Everything else is fundamentally different. They can be compared, but there's not really a good reason to do so.

February 14th, 2024

An Involuntary Exercise in Patience

Disclaimer: This is going to sound extremely petty. Poor baby is frustrated because he logistically can’t spend €4000, oh no!

Attentive readers know that I’ve been talking about how VR and AR are going to be the future of interfaces for years. One of the main reasons for me finally getting into coding was knowing that I had to be able to manipulate real interfaces directly, especially when interface design is entering the third dimension.

Apple’s Vision Pro announcement came quite a few years after I first pronounced XR to be the future of computing. I immediately set the required money aside in a budget earmarked for a Vision Pro, to be spent as soon as it became available. It was my goal to be there, on day one, playing around with the hardware, writing software, and experiencing the new frontier of interface design firsthand.

Well, that didn’t quite work out. I somehow didn’t expect Apple to only sell it in the US at first. That, combined with the fact that you need to go to an Apple Store to have your face and eyes measured for the device to be properly set up, creates a logistical hurdle that I’m not willing to jump over. I won’t fly to the US just to buy a device.

So, this is me, frustrated that I have to sit on the sidelines, watching other people discover a device I’ve been waiting for for years. This is obviously irrational because I’m not actually losing anything, except perhaps for a head start in spatial computing experience but still… it hurts a little.

At least it’s a good exercise in patience.

January 11th, 2024

Increased interactivity requires designers to code

It happened. The switch flipped and I’m now one of those people who believe that it’s far more productive to design in code than to move boxes and text in some design software.

I spent the last decade not wanting to believe the people who praised designers who code, but I’m convinced now.

It’s been about a year since I worked through 100 Days of SwiftUI. I built four iOS apps and about 4-5 web projects using React since then. I’m obviously still a coding-baby but it’s already very clear to me that being able to code made me a better designer.

AR interfaces are going to take this up a notch.

Three years ago, when I had an epiphany and realized that AR/VR interfaces are going to be the future of computing, I wondered how current design software would ever be able to allow me to do a good job designing AR interfaces.

I came to the conclusion that it wouldn’t. It couldn’t.

If I waited until the Figma of AR/VR interface design showed up, I’d be behind the curve of what’s going to be considered modern interfaces in the blink of an eye.

Fast forward to earlier this week, three years later.

I’m downloading the visionOS SDK, after watching a couple of WWDC23 sessions about spatial computing and how to use SwiftUI and ARKit to create AR experiences for what has a good chance of becoming the AR platform of the future.

I was right.

You can’t design AR experiences in Figma. Floaty 2D windows are only the baseline of what’ll be expected. The bare minimum.

True modern experiences will switch fluently between 2D windows and immersive experiences.
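
The visionOS SDK makes this switching a first-class concept: one SwiftUI app can declare both a regular window and an immersive space and move between them. A minimal sketch, with placeholder names of my own:

import SwiftUI

// One app, two kinds of scenes: a plain 2D window and a fully immersive space.
@main
struct SpatialSketchApp: App {
    var body: some Scene {
        WindowGroup {
            LauncherView()
        }

        ImmersiveSpace(id: "immersive") {
            // RealityKit content would live here; a label keeps the sketch short.
            Text("This is the immersive part of the experience.")
        }
    }
}

struct LauncherView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @State private var isImmersed = false

    var body: some View {
        Button(isImmersed ? "Back to the window" : "Go immersive") {
            Task {
                if isImmersed {
                    await dismissImmersiveSpace()
                } else {
                    await openImmersiveSpace(id: "immersive")
                }
                isImmersed.toggle()
            }
        }
    }
}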

Designers need to be ready for it.

I spent the week playing around with visionOS, trying out interactions, building small apps, and getting a literal grip on how to interact with 3D models in AR space, and I’m convinced that I’d be utterly lost had I not spent the last couple of years working on what will be (is?) the required skillset for AR interface designers.

Designers need to understand 3D modeling, meshes, materials, textures, shaders, faces, vertices and edges. I knew nothing about any of this three years ago and it was already required knowledge in this very first week of AR interface experimentation.

Designers need to be able to code. 3D drag gestures, interactive 3D models, and a blend of immersive experiences and 2D windows in real-life environments can’t be properly reproduced in some AR-Figma of the future. AR design is the climax of self-efficacy in interface design.
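
One small example of what I mean: making a 3D model draggable in a RealityView comes down to gestures targeted at entities and coordinate conversions, things you reason about in code rather than on an artboard. A rough sketch, with a generated sphere standing in for a real model:

import SwiftUI
import RealityKit

// A sphere you can grab and drag through the room.
struct DraggableModelView: View {
    var body: some View {
        RealityView { content in
            // A generated sphere stands in for a real USDZ asset.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            // The entity needs input and collision components to receive gestures.
            sphere.components.set(InputTargetComponent())
            sphere.generateCollisionShapes(recursive: true)
            content.add(sphere)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    // Convert the gesture's 3D location into the entity's parent
                    // space and move the entity to follow the hand.
                    value.entity.position = value.convert(
                        value.location3D,
                        from: .local,
                        to: parent
                    )
                }
        )
    }
}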

Designers need to adapt to be able to provide experiences that are as personal as spatial computing is going to be. They can’t be several degrees removed from what users are going to interact with anymore.

Being a designer who codes makes you a better designer in 2023.

Being a designer who doesn’t code might make you a bad designer in 2024 and beyond.

June 26th, 2023

Self-Efficacy in Spatial Computing

There’s this concept of self-efficacy in psychology that really resonates with me. I see a lot of life through this lens. Here’s Wikipedia’s definition:

In psychology, self-efficacy is an individual’s belief in their capacity to act in the ways necessary to reach specific goals. A strong sense of self-efficacy promotes human accomplishment and personal well-being. A person with high self-efficacy views challenges as things that are supposed to be mastered rather than threats to avoid.

I believe that you can choose to be self-efficacious and things you do can make you feel self-efficacious. Most people fail to recognize when these moments occur, and even fewer make a conscious effort to intentionally create such moments for themselves.

Changing the physical world does this to humans. That’s one of the reasons so many people daydream about gardening and why pottery feels a bit like therapy. You create something that wasn’t there before. You moved something and it stayed in place. You’ve literally made a teeny-tiny dent in the universe.

You won’t be able to describe to a person who has never experienced anything like it how gardening makes you feel. Starting with nothing, spending hours of work, and accepting failures and imperfections to then see the result of something you made tickles the core of what we are. Sure, you can explain all the steps of the process and tell them you felt “good” doing so, but there’s no way to describe the intensity of that feeling.

Turning a rotary dial to call someone, pressing buttons to control a SNES video game character, and swiping and tapping on glass to send an email did this to us with ever-increasing directness. Every evolution of digital technology helped us feel more self-efficacious.

For better or worse.

I think AR interfaces are the inevitable next step in computing because they make us feel more self-efficacious. You won’t be able to properly describe how moving digital windows in the physical space of real life made you feel. It’s counter-intuitive to even think that the way you interact with the window of your bursting inbox makes a difference, yet it does.

Spatial computing can’t be described. It must be felt to be understood.

June 17th, 2023